The Slop Candidate

The Atlantic - Technology

For me, it's the amber glow of the fry machine gently illuminating the exhausted 45th president of the United States of America. The glare of the potato-warming apparatus casts a shadow on the left side of Donald Trump's face as he works at a McDonald's in Bucks County, Pennsylvania. This man, who held the nuclear codes just 1,369 days ago, is now wearing an apron and doling out fast food. The images of Trump's McDonald's stunt--in which he jiggled the fryer and handed burgers out of a window yesterday--are uncanny. There's Trump, face contorted in the appearance of deep concentration, tilting a fry basket to the heavens; Trump hanging two-thirds of the way out a drive-through window, waving like a beleaguered Norman Rockwell character; Trump, mouth agape, appearing to yell into the middle distance of a fast-food parking lot.


She was accused of faking an incriminating video of teenage cheerleaders. She was arrested, outcast and condemned. The problem? Nothing was fake after all

The Guardian

Madi Hime is taking a deep drag on a blue vape in the video, her eyes shut, her face flushed with pleasure. The 16-year-old exhales with her head thrown back, collapsing into laughter that causes smoke to billow out of her mouth. The clip is grainy and shaky – as if shot in low light by someone who had zoomed in on Madi's face – but it was damning. Madi was a cheerleader with the Victory Vipers, a highly competitive "all-star" squad based in Doylestown, Pennsylvania. The Vipers had a strict code of conduct; being caught partying and vaping could have got her thrown out of the team. And in July 2020, an anonymous person sent the incriminating video directly to Madi's coaches. Eight months later, that footage was the subject of a police news conference. "The police reviewed the video and other photographic images and found them to be what we now know to be called deepfakes," district attorney Matt Weintraub told the assembled journalists at the Bucks County courthouse on 15 March 2021. Someone was deploying cutting-edge technology to tarnish a teenage cheerleader's reputation. The vaping video was just one of many disturbing communications brought to the attention of Hilltown Township police department, Weintraub said. Madi had been receiving messages telling her she should kill herself. Her mother, Jennifer Hime, had told officers someone had been taking images from Madi's social media and manipulating them "to make her appear to be drinking".



GRAM: Global Reasoning for Multi-Page VQA

Blau, Tsachi, Fogel, Sharon, Ronen, Roi, Golts, Alona, Ganz, Roy, Avraham, Elad Ben, Aberdam, Aviad, Tsiper, Shahar, Litman, Ron

arXiv.org Artificial Intelligence

The increasing use of transformer-based large language models brings forward the challenge of processing long sequences. In document visual question answering (DocVQA), leading methods focus on the single-page setting, while documents can span hundreds of pages. We present GRAM, a method that seamlessly extends pre-trained single-page models to the multi-page setting, without requiring computationally heavy pretraining. To do so, we leverage a single-page encoder for local page-level understanding, and enhance it with document-level designated layers and learnable tokens, facilitating the flow of information across pages for global reasoning. To encourage our model to utilize the newly introduced document-level tokens, we propose a tailored bias adaptation method. For additional computational savings during decoding, we introduce an optional compression stage using our C-Former model, which reduces the encoded sequence length, thereby allowing a tradeoff between quality and latency. Extensive experiments showcase GRAM's state-of-the-art performance on the benchmarks for multi-page DocVQA, demonstrating the effectiveness of our approach.
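The local-then-global flow the abstract describes can be illustrated with a toy NumPy sketch. Everything here is invented for illustration: the shapes, the single-head dot-product attention, and the variable names are assumptions, and the real GRAM model uses trained transformer layers rather than random features.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pages, page_len, n_doc_tok, dim = 3, 4, 2, 8

# Hypothetical stand-in for a frozen single-page encoder's per-page output.
page_feats = rng.normal(size=(n_pages, page_len, dim))

# Learnable document-level tokens, one small set per page.
doc_tokens = rng.normal(size=(n_pages, n_doc_tok, dim))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Local step: each page's doc tokens attend only to that page's features,
# summarizing page-level content.
local = np.stack([
    softmax(doc_tokens[p] @ page_feats[p].T) @ page_feats[p]
    for p in range(n_pages)
])

# Global step: all doc tokens attend to one another across pages,
# letting information flow between pages for document-level reasoning.
flat = local.reshape(n_pages * n_doc_tok, dim)
global_out = softmax(flat @ flat.T) @ flat
```

The point of the sketch is the asymmetry: full-length page features never attend across pages (keeping cost near the single-page model), while the few document tokens carry cross-page information.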


The Era of Retail Biometric AI Crime Has Begun

#artificialintelligence

Each new technology seems to offer creative tools for both criminals and law enforcement. Telegraphs and telephones spawned wire and phone fraud. The era of digital networking ushered in the age of hacking. Digital banking enabled identity theft at a distance. Technically sophisticated criminals have for decades been writing self-learning algorithms that probe for system weaknesses, search out vulnerable information, and break poorly designed passcodes.


Pennsylvania Woman Accused of Using Deepfake Technology to Harass Cheerleaders

#artificialintelligence

In an arrest affidavit, the Hilltown Township Police Department in Bucks County accused Raffaela Marie Spone, 50, of cyberbullying three teenagers she knew at the Victory Vipers cheerleading gym in Doylestown, Pa., about 35 miles north of Philadelphia. Police officials said that over the summer, Ms. Spone had sent anonymous text messages from several fake phone numbers to the cheerleaders, their parents and the gym owners. The police suspect that the altered media was created through deepfake technology, which is becoming both more sophisticated and accessible, playing into experts' concerns that it can be used to harass or commit crimes. With deepfake technology, people can take a still image and map it onto an existing video to disparagingly alter the appearance of someone. "This technology is not only very prevalent, but easy to use," said Matt Weintraub, the Bucks County district attorney, whose office has been overseeing the case.


Five Recruiting Trends for the New Decade

#artificialintelligence

Traditional job interviews leave something to be desired. Relying on hiring managers to handle them without injecting some level of personal bias is fraught with risk, and the fact that many busy managers would rather not spend time interviewing candidates in the first place only compounds the problem. Enter the Tengai HR robot. This 16-inch-tall robotic device gets right to the task without stopping to engage in small talk with each applicant. The robot asks critical questions succinctly with no emotion, no bias and no preconceived notions.


Demystifying Differentiable Programming: Shift/Reset the Penultimate Backpropagator

Wang, Fei, Wu, Xilun, Essertel, Gregory, Decker, James, Rompf, Tiark

arXiv.org Machine Learning

Deep learning has seen tremendous success over the past decade in computer vision, machine translation, and gameplay. This success rests in crucial ways on gradient-descent optimization and the ability to learn parameters of a neural network by backpropagating observed errors. However, neural network architectures are growing increasingly sophisticated and diverse, which motivates an emerging quest for even more general forms of differentiable programming, where arbitrary parameterized computations can be trained by gradient descent. In this paper, we take a fresh look at automatic differentiation (AD) techniques, and especially aim to demystify the reverse-mode form of AD that generalizes backpropagation in neural networks. We uncover a tight connection between reverse-mode AD and delimited continuations, which permits implementing reverse-mode AD purely via operator overloading and without any auxiliary data structures. We further show how this formulation of AD can be fruitfully combined with multi-stage programming (staging), leading to a highly efficient implementation that combines the performance benefits of deep learning frameworks based on explicit reified computation graphs (e.g., TensorFlow) with the expressiveness of pure library approaches (e.g., PyTorch).
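The core claim, that reverse-mode AD can be implemented purely via operator overloading with no explicit graph framework, can be shown in a few lines of Python. This is a conventional closure-based sketch, not the paper's shift/reset delimited-continuation formulation, and the `Var` class and its fields are invented for illustration.

```python
class Var:
    """A value that records how it was computed, enabling reverse-mode AD."""
    def __init__(self, val, parents=()):
        self.val, self.grad = val, 0.0
        self._parents = parents  # pairs of (parent Var, local partial derivative)

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.val + other.val, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.val * other.val, [(self, other.val), (other, self.val)])

    __radd__, __rmul__ = __add__, __mul__

    def backward(self):
        # Topologically order the recorded graph, then push adjoints
        # backward: this is backpropagation generalized to any program
        # built from the overloaded operators.
        order, seen = [], set()
        def visit(v):
            if id(v) not in seen:
                seen.add(id(v))
                for p, _ in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for p, local in v._parents:
                p.grad += local * v.grad

# d/dx of f(x) = x*x + 3*x at x = 2 is 2*x + 3 = 7.
x = Var(2.0)
y = x * x + 3 * x
y.backward()
```

The paper's contribution is to eliminate even the recorded parent structure by threading the backward pass through delimited continuations, and then to stage the whole thing for graph-level performance; the sketch above only shows the operator-overloading half of that story.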


Translating Embeddings for Modeling Multi-relational Data

Bordes, Antoine, Usunier, Nicolas, Garcia-Duran, Alberto, Weston, Jason, Yakhnenko, Oksana

Neural Information Processing Systems

We consider the problem of embedding entities and relationships of multi-relational data in low-dimensional vector spaces. Our objective is to propose a canonical model which is easy to train, contains a reduced number of parameters and can scale up to very large databases. Hence, we propose TransE, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities. Despite its simplicity, this assumption proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases. Besides, it can be successfully trained on a large-scale data set with 1M entities, 25k relationships and more than 17M training samples.
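The translation idea is that for a true triple (h, r, t) the embeddings should satisfy h + r ≈ t, trained with a margin-based ranking loss against corrupted triples. A minimal NumPy sketch of that objective follows; the toy entities, learning rate, and the simplified SGD step (no minibatching or norm constraints) are assumptions for illustration, not the paper's full training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy knowledge base: hypothetical entity/relation names for illustration.
entities = ["paris", "france", "berlin", "germany"]
relations = ["capital_of"]
dim = 8

E = {e: rng.normal(scale=0.1, size=dim) for e in entities}
R = {r: rng.normal(scale=0.1, size=dim) for r in relations}

def score(h, r, t):
    # TransE dissimilarity d(h + r, t); lower means more plausible.
    return np.linalg.norm(E[h] + R[r] - E[t])

def sgd_step(pos, neg, margin=1.0, lr=0.01):
    # Margin-based ranking loss on a true triple vs. a corrupted one.
    (h, r, t), (h2, _, t2) = pos, neg
    loss = margin + score(h, r, t) - score(h2, r, t2)
    if loss > 0:
        # Gradients of the two L2 distances w.r.t. the embeddings.
        g_pos = (E[h] + R[r] - E[t]) / (score(h, r, t) + 1e-9)
        g_neg = (E[h2] + R[r] - E[t2]) / (score(h2, r, t2) + 1e-9)
        E[h] -= lr * g_pos
        E[t] += lr * g_pos
        R[r] -= lr * (g_pos - g_neg)
        E[h2] += lr * g_neg
        E[t2] -= lr * g_neg
    return max(loss, 0.0)

for _ in range(200):
    sgd_step(("paris", "capital_of", "france"),
             ("paris", "capital_of", "germany"))
```

After training, the true triple should score lower (more plausible) than the corrupted one, which is exactly the ranking behavior the link-prediction experiments measure.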